Training Quantized Nets: A Deeper Understanding

Neural Information Processing Systems

Currently, deep neural networks are deployed on low-power portable devices by first training a full-precision model using powerful hardware, and then deriving a corresponding low-precision model for efficient inference on such systems. However, training models directly with coarsely quantized weights is a key step towards learning on embedded platforms that have limited computing resources, memory capacity, and power consumption. Numerous recent publications have studied methods for training quantized networks, but these studies have mostly been empirical. In this work, we investigate training methods for quantized neural networks from a theoretical viewpoint. We first explore accuracy guarantees for training methods under convexity assumptions. We then look at the behavior of these algorithms for non-convex problems, and show that training algorithms that exploit high-precision representations have an important greedy search phase that purely quantized training methods lack, which explains the difficulty of training using low-precision arithmetic.
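As a concrete illustration of the deployment pipeline the abstract describes (train at full precision, then derive a low-precision model for inference), here is a minimal sketch of post-training uniform quantization. The bit width, grid choice, and function name are illustrative assumptions, not details from the paper:

```python
import numpy as np

def quantize_uniform(w, num_bits=8):
    """Map each full-precision weight to the nearest of 2**num_bits
    evenly spaced levels spanning [-max|w|, +max|w|].

    Illustrative sketch only: real deployments typically also
    quantize activations and calibrate the scale on data.
    """
    scale = np.max(np.abs(w)) / (2 ** (num_bits - 1) - 1)
    if scale == 0:
        return w.copy()
    q = np.round(w / scale)                                  # nearest integer level
    q = np.clip(q, -(2 ** (num_bits - 1)), 2 ** (num_bits - 1) - 1)
    return q * scale                                         # dequantized low-precision weights

# A full-precision weight vector and its 4-bit approximation.
w = np.array([0.31, -0.72, 0.05, 0.99])
w_q = quantize_uniform(w, num_bits=4)
```

Each weight lands within half a grid step of its original value; the paper's subject is the harder problem of training with such coarse weights directly, rather than deriving them afterwards.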


Reviews: Training Quantized Nets: A Deeper Understanding

Neural Information Processing Systems

This paper investigates, theoretically and numerically, why the recent BinaryConnect (BC) method works better than more traditional rounding schemes such as Stochastic Rounding (SR). It proves that for convex functions, (the continuous weights in) BC can converge to the global minimum, while SR methods fare less well. It is also proven that, below a certain value, decreasing the learning rate does not affect what SR learns, except that the learning process is slowed down. The paper is, to the best of my understanding: 1) Clear, modulo the issues below. Specifically, I think it helps clarify why it is so hard to train with SR compared to BC (it would be extremely useful if one could use SR, since then there would be no need to store the full-precision weights during training).
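The distinction the review draws (BC accumulates gradient updates in a full-precision copy of the weights, while SR commits every update to the quantized grid immediately) can be sketched as follows. The grid step, learning rate, and function names are illustrative assumptions, not the paper's notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def stoch_round(x, step=0.1):
    """Round x to the grid {k * step} stochastically: round up with
    probability equal to the fractional distance to the next level,
    so the rounding is unbiased in expectation."""
    scaled = x / step
    floor = np.floor(scaled)
    prob_up = scaled - floor
    return (floor + (rng.random(x.shape) < prob_up)) * step

def sr_step(w_q, grad, lr=0.01, step=0.1):
    # Stochastic Rounding: only quantized weights are stored, so a
    # gradient step much smaller than the grid step survives only
    # with small probability (roughly lr * |grad| / step).
    return stoch_round(w_q - lr * grad, step=step)

def bc_step(w_r, grad_at_quantized, lr=0.01):
    # BinaryConnect: the gradient is computed at the quantized
    # weights, but accumulated into the full-precision copy w_r,
    # so small updates are never discarded.
    return w_r - lr * grad_at_quantized
```

This makes the review's parenthetical concrete: BC must keep `w_r` in full precision during training, which is exactly the storage cost one would avoid if SR trained as well.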


Training Quantized Nets: A Deeper Understanding

Li, Hao, De, Soham, Xu, Zheng, Studer, Christoph, Samet, Hanan, Goldstein, Tom

Neural Information Processing Systems